
    Search for a heavy charged gauge boson decaying to a muon and a neutrino in 1 fb-1 of proton-proton collisions at sqrt(s) = 7 TeV using the ATLAS Detector

    The ATLAS detector is used to search for high-mass states, such as heavy charged gauge bosons (W′), decaying to a muon and a neutrino. Results are presented based on the analysis of pp collisions at a center-of-mass energy of 7 TeV corresponding to an integrated luminosity of 1.04 fb-1. No excess beyond standard model expectations is observed. A W′ with sequential standard model couplings is excluded at 95% confidence level for masses below 1.98 TeV. Results from the muon channel are also combined with the electron channel to further extend the mass limit up to 2.15 TeV. This is the most stringent limit published to date.

    CP violating anomalous top-quark couplings at the LHC

    We study the T-odd correlations induced by CP-violating anomalous top-quark couplings at both the production and decay level in the process gg → t t̄ → (b μ⁺ ν_μ) (b̄ μ⁻ ν̄_μ). We consider several counting asymmetries at the parton level and identify the ones most sensitive to each of these anomalous couplings at the LHC.
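
    As an illustration (the operators actually used in the paper may differ), a T-odd observable in this final state can be built from a triple product of final-state momenta, with a counting asymmetry comparing how often it comes out positive versus negative:

    \mathcal{O} \;=\; \vec{p}_{b} \cdot \left( \vec{p}_{\mu^{+}} \times \vec{p}_{\mu^{-}} \right),
    \qquad
    A_{\mathcal{O}} \;=\; \frac{N(\mathcal{O}>0) \;-\; N(\mathcal{O}<0)}{N(\mathcal{O}>0) \;+\; N(\mathcal{O}<0)}

    Comparing A_O between the process and its CP conjugate separates genuine CP violation from asymmetries induced by final-state interactions.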

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Searches for supersymmetry with electroweak and third generation squark production at ATLAS

    Searches for supersymmetric sleptons, neutralinos and charginos, and top and bottom squarks have been conducted using the large pp collision dataset recorded by ATLAS in 2015 and 2016 at a centre-of-mass energy of 13 TeV. These searches constrain the most natural SUSY scenarios. The seminar presents the latest results from these searches, interpreted in both R-parity-conserving and R-parity-violating supersymmetry scenarios.

    New Physics Searches with ATLAS

    Highlights from new physics searches with the ATLAS detector at the CERN Large Hadron Collider are presented. Results are based on the analysis of data collected in pp collisions at a center-of-mass energy of 7 TeV, corresponding to integrated luminosities of 1-5 fb−1. No excess beyond Standard Model expectations is observed.

    SUSY at ATLAS

    SUSY: News from Run 2 searches.

    Searches for Higgsinos and related challenges in ATLAS

    Natural models of supersymmetry (SUSY) typically favour the existence of light Higgsinos. Models in which the only accessible SUSY particles are charginos and neutralinos that are predominantly Higgsino-like tend to have small mass splittings between these particles. Such models, commonly referred to as compressed models, lead to final states with low-momentum leptons and disappearing tracks that are experimentally challenging to characterise. These proceedings present the latest results from Higgsino searches conducted using the large pp collision dataset recorded by ATLAS in 2015 and 2016 at a centre-of-mass energy of 13 TeV.

    Shared I/O Developments for Run 3 in the ATLAS Experiment

    The ATLAS experiment extensively uses multi-process (MP) parallelism to maximize data throughput, especially in I/O-intensive workflows such as the production of Derived Analysis Object Data (DAOD). In this mode, worker processes are spawned at the end of job initialization, thereby sharing the memory allocated up to that point. Each worker then loops over a unique set of events and produces its own output file, which in the original implementation had to be merged in a subsequent, serially executed step. In Run 2, SharedWriter was introduced to perform this task on the fly: an additional process merges data from the workers while the job is running, eliminating the need for the extra merging step. Although this approach has been very successful, there was room for improvement, most notably in the event-throughput scaling as a function of the number of workers, which was limited by the fact that the Run 2 version performs all data compression within the SharedWriter process. For Run 3, a new version of SharedWriter addresses this limitation by moving data compression to the worker processes. This development also paves the way for using it in hybrid multi-thread (MT) and MP workflows to maximize I/O efficiency. In this talk, we discuss the latest developments in shared I/O in the ATLAS experiment.
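
    A minimal sketch of the Run 3 idea in plain Python multiprocessing (illustrative only; this is not the ATLAS code, and the length-prefixed framing is invented for the example): each worker compresses its own payloads, and the single writer process merely appends the pre-compressed buffers to one output file.

    # shared_writer_sketch.py -- toy model of the Run 3 SharedWriter idea;
    # not ATLAS code. Workers compress (the Run 3 change); the writer only appends.
    import multiprocessing as mp
    import zlib

    def worker(worker_id, events, queue):
        for event in events:
            payload = f"worker {worker_id}: event {event}".encode()
            queue.put(zlib.compress(payload))  # compression happens in the worker
        queue.put(None)  # sentinel: this worker is done

    def shared_writer(queue, n_workers, path):
        finished = 0
        with open(path, "wb") as out:
            while finished < n_workers:
                buf = queue.get()
                if buf is None:
                    finished += 1
                else:
                    out.write(len(buf).to_bytes(4, "big"))  # hypothetical framing
                    out.write(buf)  # no compression work in the writer process

    if __name__ == "__main__":
        queue = mp.Queue()
        n_workers = 4
        workers = [mp.Process(target=worker, args=(i, range(1000), queue))
                   for i in range(n_workers)]
        writer = mp.Process(target=shared_writer, args=(queue, n_workers, "merged.dat"))
        writer.start()
        for p in workers:
            p.start()
        for p in workers:
            p.join()
        writer.join()

    In the Run 2 scheme the zlib.compress call would instead sit inside shared_writer, serializing all compression work in one process; moving it into worker keeps the writer I/O-bound and lets compression scale with the number of workers.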

    Optimizing ATLAS data storage: the impact of compression algorithms on ATLAS physics analysis data formats

    The increased data footprint foreseen for Run 3 and the HL-LHC will soon expose the limits of currently available storage and CPU resources. Data formats are already optimized for the processing chain for which they are designed. ATLAS events are stored in ROOT-based reconstruction output files called Analysis Object Data (AOD), which are then processed within the derivation framework to produce Derived AOD (DAOD) files. Numerous DAOD formats, tailored to specific physics and performance groups, were in use throughout ATLAS Run 2. For Run 3, ATLAS has changed its analysis model, entailing a significant reduction in the number of DAOD flavors. Two new unfiltered formats, skimmable on read, have been proposed as replacements: DAOD_PHYS, designed to meet the requirements of the majority of analysis workflows, and DAOD_PHYSLITE, a smaller format containing already-calibrated physics objects. As ROOT-based formats, they natively support four lossless compression algorithms: LZMA, LZ4, zlib, and Zstd. In this study, the effects of different compression settings on file size, compression time, compression factor, and reading speed are investigated for both DAOD_PHYS and DAOD_PHYSLITE. Total as well as partial event-reading strategies have been tested. Moreover, the impact of AutoFlush and SplitLevel, two parameters controlling how in-memory data structures are serialized to ROOT files, has been evaluated. The study yields quantitative results that can guide compression decisions for different ATLAS use cases. For example, for both DAOD_PHYS and DAOD_PHYSLITE, LZ4 exhibits the fastest reading speed but produces the largest files, whereas LZMA provides larger compression factors at the cost of significantly slower reading. In addition, guidelines for setting appropriate AutoFlush and SplitLevel values are outlined.
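
    The trade-off described above can be reproduced in miniature with Python's standard-library codecs (zlib and LZMA; LZ4 and Zstd would need external packages). This is a toy benchmark on synthetic data, not the ATLAS measurement; in ROOT itself the corresponding knobs are the file's compression algorithm and level together with the tree's AutoFlush and branch SplitLevel settings.

    # toy_compression_benchmark.py -- illustrative size/speed trade-off only;
    # not the ATLAS study and not ROOT I/O.
    import lzma
    import time
    import zlib

    # Mildly repetitive synthetic payload standing in for event data.
    data = b"".join(f"event {i % 1000}: pt={i * 0.1:.2f}\n".encode()
                    for i in range(200_000))

    codecs = {
        "zlib (level 6)": (lambda d: zlib.compress(d, 6), zlib.decompress),
        "LZMA (preset 6)": (lambda d: lzma.compress(d, preset=6), lzma.decompress),
    }

    for name, (compress, decompress) in codecs.items():
        t0 = time.perf_counter()
        blob = compress(data)
        t1 = time.perf_counter()
        decompress(blob)  # stands in for "reading speed" in the study
        t2 = time.perf_counter()
        print(f"{name}: compression factor {len(data) / len(blob):.1f}, "
              f"compress {t1 - t0:.3f} s, decompress {t2 - t1:.3f} s")

    On typical inputs LZMA yields the higher compression factor at a noticeably higher CPU cost, mirroring the LZMA-versus-LZ4 pattern reported above.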